Oregon County
WildfireGenome: Interpretable Machine Learning Reveals Local Drivers of Wildfire Risk and Their Cross-County Variation
Current wildfire risk assessments rely on coarse hazard maps and opaque machine learning models that optimize regional accuracy while sacrificing interpretability at the decision scale. WildfireGenome addresses these gaps through three components: (1) fusion of seven federal wildfire indicators into a sign-aligned, PCA-based composite risk label at H3 Level-8 resolution; (2) Random Forest classification of local wildfire risk; and (3) SHAP and ICE/PDP analyses to expose county-specific nonlinear driver relationships. Across seven ecologically diverse U.S. counties, models achieve accuracies of 0.755-0.878 and Quadratic Weighted Kappa up to 0.951, with principal components explaining 87-94% of indicator variance. Transfer tests show reliable performance between ecologically similar regions but collapse across dissimilar contexts. Explanations consistently highlight needleleaf forest cover and elevation as dominant drivers, with risk rising sharply at 30-40% needleleaf coverage. WildfireGenome advances wildfire risk assessment from regional prediction to interpretable, decision-scale analytics that guide vegetation management, zoning, and infrastructure planning.
- North America > United States > Arkansas > Cross County (0.41)
- North America > United States > California > Sonoma County (0.14)
- North America > United States > Texas > Brazos County > College Station (0.14)
- (17 more...)
- Information Technology > Security & Privacy (0.69)
- Energy (0.68)
- Government > Regional Government > North America Government > United States Government (0.46)
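The WildfireGenome abstract above describes a pipeline of PCA-based composite labels feeding a Random Forest classifier. A minimal sketch of that idea on synthetic data follows; the seven-indicator matrix, the four risk classes, and the sign-alignment rule (flipping each loading positive) are all assumptions for illustration, not the paper's actual procedure.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for seven federal wildfire indicators per H3 cell.
X = rng.normal(size=(1000, 7))

# Sign-aligned PCA composite: flip each loading positive (an assumed
# alignment rule), then bin the first component into ordinal risk classes.
pca = PCA(n_components=1).fit(X)
loadings = np.abs(pca.components_[0])
score = X @ loadings
y = np.digitize(score, np.quantile(score, [0.25, 0.5, 0.75]))  # 4 classes

# Random Forest learns the composite label from the raw indicators.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(round(acc, 2))
```

The SHAP and ICE/PDP explanation step the abstract mentions would sit on top of `clf` (e.g. via the `shap` library's tree explainer) and is omitted here.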
Austin city agency offers racially segregated 'anti-racist' trainings for 'white folks' and 'people of color'
Fox News host Greg Gutfeld goes over this week's leftovers and Gutfeld! reacts to the resurfacing of an old training video on DEI by former Navy DEI director Dr. Charles "Chuck" Barber. A city agency in Austin, Texas invited employees to racially segregated "anti-racist" meetings where "white folks" were asked not to attend a meeting that was only for "people of color." A January email obtained by Fox News Digital reveals Austin's Parks & Recreation Department's equity and inclusion coordinator invited employees to attend "Antiracist Affinity Spaces," consisting of two separate trainings segregated by race as part of an "Equity and Inclusion program." "For People of Color*: Once a month, PARD employees of color will meet up at various city sites," the email says. "The first 1.5 hours will be for fostering dialogue and the last 30 minutes will be for networking. This monthly space will offer folks the opportunities to gather and connect with other PARD employees of color, share about our personal and professional experiences with racism, and learn about mentoring and job opportunities for professional development."
- North America > United States > Texas > Travis County > Austin (0.27)
- North America > United States > Oregon (0.05)
- North America > United States > Missouri > Oregon County (0.05)
- (4 more...)
Question Answering as Programming for Solving Time-Sensitive Questions
Zhu, Xinyu, Yang, Cheng, Chen, Bei, Li, Siheng, Lou, Jian-Guang, Yang, Yujiu
Question answering plays a pivotal role in human daily life because it involves our acquisition of knowledge about the world. However, due to the dynamic and ever-changing nature of real-world facts, the answer can be completely different when the time constraint in the question changes. Recently, Large Language Models (LLMs) have shown remarkable intelligence in question answering, while our experiments reveal that the aforementioned problems still pose a significant challenge to existing LLMs. This can be attributed to the LLMs' inability to perform rigorous reasoning based on surface-level text semantics. To overcome this limitation, rather than requiring LLMs to directly answer the question, we propose a novel approach where we reframe the Question Answering task as Programming (QAaP). Concretely, by leveraging modern LLMs' superior capability in understanding both natural language and programming language, we endeavor to harness LLMs to represent diversely expressed text as well-structured code and select the best matching answer from multiple candidates through programming. We evaluate our QAaP framework on several time-sensitive question answering datasets and achieve decent improvement, up to 14.5% over strong baselines. Our codes and data are available at https://github.com/TianHongZXY/qaap
- Asia > China > Liaoning Province > Dalian (0.04)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- North America > United States > Oregon > Klamath County > Klamath Falls (0.04)
- (8 more...)
- Personal (0.93)
- Research Report > New Finding (0.46)
- Education (0.93)
- Government > Regional Government > North America Government > United States Government (0.93)
- Leisure & Entertainment > Sports > Soccer (0.68)
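The QAaP abstract above proposes representing diversely expressed text as structured code and then selecting answers by program execution. A toy sketch of that reframing for a time-sensitive question follows; the dict schema, the example facts, and the `answer_at` helper are hypothetical illustrations, not the paper's implementation.

```python
from datetime import date

# Hypothetical structured representation an LLM might emit for a passage
# like "X was CEO from 2010 to mid-2015; Y has been CEO since then."
candidates = [
    {"answer": "X", "start": date(2010, 1, 1), "end": date(2015, 6, 30)},
    {"answer": "Y", "start": date(2015, 7, 1), "end": date(9999, 12, 31)},
]

def answer_at(candidates, when):
    """Select the candidate whose validity interval contains `when`."""
    for c in candidates:
        if c["start"] <= when <= c["end"]:
            return c["answer"]
    return None

print(answer_at(candidates, date(2013, 5, 1)))   # X
print(answer_at(candidates, date(2020, 1, 1)))   # Y
```

The point of the reframing is that the time comparison is now exact program logic rather than surface-level text matching, so changing the question's time constraint changes the answer deterministically.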
The Unfairness of Fair Machine Learning: Levelling down and strict egalitarianism by default
Mittelstadt, Brent, Wachter, Sandra, Russell, Chris
In recent years, fairness in machine learning (ML) has emerged as a highly active area of research and development. Most define fairness in simple terms, where fairness means reducing gaps in performance or outcomes between demographic groups while preserving as much of the accuracy of the original system as possible. This oversimplification of equality through fairness measures is troubling. Many current fairness measures suffer from both fairness and performance degradation, or "levelling down," where fairness is achieved by making every group worse off, or by bringing better performing groups down to the level of the worst off. When fairness can only be achieved by making everyone worse off in material or relational terms through injuries of stigma, loss of solidarity, unequal concern, and missed opportunities for substantive equality, something would appear to have gone wrong in translating the vague concept of 'fairness' into practice. This paper examines the causes and prevalence of levelling down across fairML, and explores possible justifications and criticisms based on philosophical and legal theories of equality and distributive justice, as well as equality law jurisprudence. We find that fairML does not currently engage in the type of measurement, reporting, or analysis necessary to justify levelling down in practice. We propose a first step towards substantive equality in fairML: "levelling up" systems by design through enforcement of minimum acceptable harm thresholds, or "minimum rate constraints," as fairness constraints. We likewise propose an alternative harms-based framework to counter the oversimplified egalitarian framing currently dominant in the field and push future discussion more towards substantive equality opportunities and away from strict egalitarianism by default. N.B. Shortened abstract, see paper for full abstract.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > United States > Oregon > Benton County (0.04)
- North America > United States > Missouri > Oregon County (0.04)
- (2 more...)
- Law > Civil Rights & Constitutional Law (1.00)
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Government > Regional Government (1.00)
- Education > Educational Setting (0.67)
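The fairness abstract above proposes "minimum rate constraints" so that fairness cannot be achieved by levelling every group down. A minimal sketch of what such a check could look like follows; the function name, the per-group true-positive rates, and the 0.8 floor are all hypothetical illustrations of the idea, not the paper's formulation.

```python
# Hypothetical "minimum rate constraint": every demographic group's
# true-positive rate must clear an acceptable floor, so a model cannot
# satisfy fairness by dragging all groups down to the worst-off level.
def meets_minimum_rate(tpr_by_group, floor=0.8):
    return all(tpr >= floor for tpr in tpr_by_group.values())

print(meets_minimum_rate({"A": 0.91, "B": 0.84}))  # True: both clear the floor
print(meets_minimum_rate({"A": 0.91, "B": 0.62}))  # False: group B below floor
```

Note that a model with rates {"A": 0.62, "B": 0.62} is perfectly "fair" under a gap-based measure yet fails this floor check, which is exactly the levelling-down case the abstract criticizes.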
Large Language Models Can Be Strong Differentially Private Learners
Li, Xuechen, Tramèr, Florian, Liang, Percy, Hashimoto, Tatsunori
Differentially Private (DP) learning has seen limited success for building large deep learning models of text, and straightforward attempts at applying Differentially Private Stochastic Gradient Descent (DP-SGD) to NLP tasks have resulted in large performance drops and high computational overhead. We show that this performance drop can be mitigated with (1) the use of large pretrained language models; (2) non-standard hyperparameters that suit DP optimization; and (3) fine-tuning objectives which are aligned with the pretraining procedure. With the above, we obtain NLP models that outperform state-of-the-art DP-trained models under the same privacy budget and strong non-private baselines -- by directly fine-tuning pretrained models with DP optimization on moderately-sized corpora. To address the computational challenge of running DP-SGD with large Transformers, we propose a memory-saving technique that allows clipping in DP-SGD to run without instantiating per-example gradients for any linear layer in the model. The technique enables privately training Transformers with almost the same memory cost as non-private training at a modest run-time overhead. Contrary to conventional wisdom that DP optimization fails at learning high-dimensional models (due to noise that scales with dimension), empirical results reveal that private learning with pretrained language models doesn't tend to suffer from dimension-dependent performance degradation. Code to reproduce results can be found at https://github.com/lxuechen/private-transformers.
- North America > United States > Oregon > Linn County > Albany (0.14)
- North America > United States > District of Columbia > Washington (0.14)
- Europe > Spain > Galicia > Madrid (0.04)
- (6 more...)
- Information Technology > Security & Privacy (1.00)
- Consumer Products & Services (0.68)
- Government (0.67)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.97)
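The DP abstract above centers on per-example gradient clipping plus calibrated noise. A NumPy sketch of one DP-SGD-style loop on a least-squares toy problem follows; the data, clipping norm, noise multiplier, and learning rate are all illustrative choices, and this deliberately materializes per-example gradients, i.e. it does not implement the paper's memory-saving trick that avoids doing so.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))
true_w = rng.normal(size=5)
y = X @ true_w + 0.1 * rng.normal(size=64)

w = np.zeros(5)
clip_norm, noise_mult, lr = 1.0, 1.0, 0.1

# Each step: compute per-example gradients, clip each to `clip_norm`,
# sum, add Gaussian noise scaled to the clipping norm, then average.
for _ in range(200):
    residual = X @ w - y
    per_ex_grads = residual[:, None] * X                 # (64, 5)
    norms = np.linalg.norm(per_ex_grads, axis=1)
    factors = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_ex_grads * factors[:, None]
    noise = noise_mult * clip_norm * rng.normal(size=5)
    g = (clipped.sum(axis=0) + noise) / len(X)
    w -= lr * g

print(round(float(np.mean((X @ w - y) ** 2)), 3))
```

The clipping step is what makes per-example gradient memory a bottleneck for large Transformers; the paper's contribution is computing the clipped sum for linear layers without ever storing the `(batch, params)` per-example gradient tensor.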
Why Is Google Slow-Walking Its Breakthroughs in AI?
Google became what it is by creating advanced new technology and throwing it open to all. Giant businesses and individuals alike can use the company's search and email services, or tap its targeting algorithms and vast audience for ad campaigns. Yet Google's progress on artificial intelligence now appears to have the company rethinking its do-what-you-will approach. The company has begun withholding or restricting some of its AI research and services, to protect the public from misuse. Google CEO Sundar Pichai has made "AI first" a company slogan, but the company's wariness of AI's power has sometimes let its competitors lead instead.
- North America > United States > Oregon > Washington County (0.05)
- North America > United States > Missouri > Oregon County (0.05)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.91)
- Law (0.74)
California could become first to limit facial recognition technology; police aren't happy
San Francisco supervisors approved a ban on police using facial recognition technology, making it the first city in the U.S. with such a restriction. SAN FRANCISCO – A routine traffic stop goes dangerously awry when a police officer's body camera uses its built-in facial recognition software to misidentify a motorist as a convicted felon. At best, lawsuits are launched. That imaginary scenario is what some California lawmakers are trying to avoid by supporting Assembly Bill 1215, the Body Camera Accountability Act, which would ban the use of facial recognition software in police body cams – a national first if it passes a Senate vote this summer and is signed by Gov. Gavin Newsom. State law enforcement officials here do not now employ the technology to scan those in the line of sight of officers.
- North America > United States > California > San Francisco County > San Francisco (0.47)
- North America > United States > California > Los Angeles County > Los Angeles (0.15)
- North America > United States > New York (0.06)
- (9 more...)
Police Are Feeding Celebrity Photos into Facial Recognition Software to Solve Crimes
Police departments across the nation are generating leads and making arrests by feeding celebrity photos, CGI renderings, and manipulated images into facial recognition software. Often unbeknownst to the public, law enforcement is identifying suspects based on "all manner of 'probe photos,' photos of unknown individuals submitted for search against a police or driver license database," a study published on Thursday by the Georgetown Law Center on Privacy and Technology reported. The new research comes on the heels of a landmark privacy vote on Tuesday in San Francisco, which is now the first US city to ban the use of facial recognition technology by police and government agencies. A recent groundswell of opposition has led to the passage of legislation that aims to protect marginalized communities from spy technology. These systems "threaten to fundamentally change the nature of our public spaces," said Clare Garvie, author of the study and senior associate at the Georgetown Law Center on Privacy and Technology.
- North America > United States > California > San Francisco County > San Francisco (0.25)
- North America > United States > New York (0.08)
- North America > United States > Oregon > Washington County (0.05)
- (3 more...)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- Government (1.00)
Facial recognition tech sucks, but it's inevitable
These are just some of the questions being raised by lawmakers, civil libertarians, and privacy advocates in the wake of an ACLU report released last summer that claimed Amazon's facial recognition software, Rekognition, misidentified 28 members of congress as criminals. Rekognition is a general-purpose, application programming interface (API) developers can use to build applications that can detect and analyze scenes, objects, faces, and other items within images. The source of the controversy was a pilot program in which Amazon teamed up with the police departments of two cities, Orlando, Florida and Washington County, Oregon, to explore the use of facial recognition in law enforcement. In January 2019, the Daily Mail reported that the FBI has been testing Rekognition since early 2018. The Project on Government Oversight also revealed via a Freedom of Information Act request that Amazon had also pitched Rekognition to ICE in June 2018.
- North America > United States > Oregon > Washington County (0.25)
- North America > United States > Missouri > Oregon County (0.25)
- North America > United States > Florida > Orange County > Orlando (0.25)
- (2 more...)
Amazon Joins Microsoft's Call for Rules on Facial Recognition
In Washington County, Oregon, sheriff's deputies use a mobile app to send photos of suspects to Amazon's cloud computing service. The e-commerce giant's algorithms check those faces against a database of tens of thousands of mugshots, using Amazon's Rekognition image analysis service. Such use of facial recognition by law enforcement is essentially unregulated. But some developers of the technology want to change that. In a blog post Thursday, Amazon asked Congress to put some rules around the use of the technology, echoing a call by Microsoft in December.
- North America > United States > Oregon > Washington County (0.25)
- North America > United States > Missouri > Oregon County (0.25)
- North America > United States > Washington (0.05)
- (2 more...)